
# Efficient Fine-Tuning Optimization

| Model | License | Description | Tags | Author | Downloads | Likes |
|---|---|---|---|---|---|---|
| Philosophy Model | Apache-2.0 | A Mistral-7B instruction fine-tuned model optimized with Unsloth and the Hugging Face TRL library, achieving a 2x training speedup. | Large Language Model, Transformers, English | raak-16 | 54 | 2 |
| Gemma 3 12b Pt Unsloth Bnb 4bit | | Gemma 3 is a lightweight, advanced open model series from Google, built on the same research and technology as Gemini, supporting multimodal input and text output. | Text-to-Image, Transformers, English | unsloth | 1,286 | 1 |
| Llama 3.2 11B Vision OCR | Apache-2.0 | A Llama 3.2-11B vision-instruct model optimized with Unsloth, released as a 4-bit quantized version with a 2x training speedup. | Large Language Model, Transformers, English | Swapnik | 80 | 1 |
| Llama 3.2 90B Vision Instruct Unsloth Bnb 4bit | | Meta's Llama 3.2-series 90B-parameter multimodal large language model supporting visual instruction understanding, optimized with Unsloth dynamic 4-bit quantization. | Text-to-Image, Transformers, English | unsloth | 58 | 2 |
| Llama 3.2 11B Vision Radiology Mini | Apache-2.0 | A radiology image interpretation assistant fine-tuned from unsloth/Llama-3.2-11B-Vision-Instruct, with roughly doubled runtime speed after optimization. | Image-to-Text, Transformers, English | 0llheaven | 885 | 1 |
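Several of the entries above were produced by fine-tuning through Unsloth together with Hugging Face's TRL library. The sketch below shows the general pattern only: the base checkpoint, LoRA hyperparameters, toy dataset, and training arguments are illustrative assumptions, not settings taken from any of the listed model cards, and the `SFTTrainer` keyword arguments shown follow the older TRL style, which varies between versions.

```python
# Minimal sketch of Unsloth + TRL supervised fine-tuning (LoRA on a 4-bit base).
# Assumptions: base checkpoint, LoRA settings, and the toy dataset are
# illustrative only; they are not taken from the listed model cards.
import torch
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load a pre-quantized 4-bit base model through Unsloth's fast loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.3-bnb-4bit",  # assumed base
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

# Toy single-example dataset with a pre-formatted "text" field,
# just to keep the sketch self-contained.
dataset = Dataset.from_list([
    {"text": "### Instruction:\nSay hello.\n\n### Response:\nHello!"},
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=10,
        learning_rate=2e-4,
        optim="adamw_8bit",
        bf16=torch.cuda.is_bf16_supported(),
        fp16=not torch.cuda.is_bf16_supported(),
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()
```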
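The "Bnb 4bit" entries are pre-quantized bitsandbytes checkpoints, so the quantization config ships inside the repo and no separate config needs to be passed at load time. A minimal loading sketch with plain Transformers, assuming the repo id `unsloth/Llama-3.2-11B-Vision-Instruct-bnb-4bit` (an assumed id; substitute the listed model you actually want), a CUDA GPU, and bitsandbytes installed:

```python
# Minimal sketch: loading a pre-quantized bnb-4bit Llama 3.2 Vision checkpoint
# with plain Transformers. The repo id below is an assumption; replace it with
# the repo of whichever listed model you want to use.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

repo_id = "unsloth/Llama-3.2-11B-Vision-Instruct-bnb-4bit"  # assumed repo id

# The 4-bit quantization config is stored in the checkpoint itself, so
# from_pretrained applies it automatically (requires bitsandbytes + CUDA).
model = MllamaForConditionalGeneration.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(repo_id)

# Placeholder image so the example is self-contained; use a real image in practice.
image = Image.new("RGB", (448, 448), color="white")

messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False,
                   return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```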